DALDA (Data Augmentation Leveraging Diffusion Model and LLM with Adaptive Guidance Scaling) is a framework for data augmentation in data-scarce scenarios. It combines a Large Language Model (LLM) and a Diffusion Model (DM) to generate semantically rich images: the LLM embeds novel semantic information into text prompts, while real images serve as visual prompts, which helps keep generated samples within the target distribution.

Installation involves creating a conda virtual environment, activating it, and installing the dependencies from a requirements file. Users are also directed to download specific models and datasets, including Flowers102, Oxford Pets, and Caltech101, with commands provided for each.

To generate prompts with the LLM (specifically GPT-4), users create a configuration file containing their Azure endpoint and API key; once the environment is set up, prompts are produced by running a designated script.

The framework also supports training classifiers, with instructions on how to run the training script and how to use a resume feature to continue interrupted training sessions.

DALDA builds on code from DA-Fusion and integrates components from IP-Adapter, diffusers, and CLIP, in compliance with their respective licenses. The repository is publicly available on GitHub, where its stars and forks reflect community interest, and the project sits within the broader context of data augmentation, synthetic data generation, and the application of diffusion models and large language models in machine learning.
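The configuration step for GPT-4 prompt generation might look like the following sketch. The file name `llm_config.yaml` and the key names are assumptions for illustration, not DALDA's actual schema; only the requirement that the file hold an Azure endpoint and API key comes from the source.

```python
from pathlib import Path

# Hypothetical config for the GPT-4 prompt-generation step.
# File name and key names are assumptions, not DALDA's actual schema.
config_text = """\
azure_endpoint: https://<your-resource>.openai.azure.com/
api_key: <your-azure-openai-key>
deployment: gpt-4
"""

path = Path("llm_config.yaml")
path.write_text(config_text)
```

A prompt-generation script would then read this file to construct its Azure OpenAI client before querying GPT-4.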
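The "adaptive guidance scaling" in DALDA's name suggests a per-sample adjustment of the diffusion guidance scale. The exact rule is not given in this summary, so the following is a purely illustrative sketch in which the visual-prompt guidance weight is relaxed when a generated sample is already very close to the real image (to encourage diversity) and kept high otherwise (to stay on-distribution, with closeness proxied by a CLIP-style similarity score). The function name, constants, and rule are all assumptions.

```python
def adaptive_guidance_scale(similarity: float,
                            base_scale: float = 1.0,
                            min_scale: float = 0.2,
                            threshold: float = 0.75) -> float:
    """Illustrative only: lower the visual-prompt guidance scale when the
    generated image is very similar to the real image, and keep it at the
    base value when similarity drops. Not DALDA's published algorithm."""
    if similarity >= threshold:
        # Sample is close to the real image: relax image guidance linearly.
        frac = (similarity - threshold) / (1.0 - threshold)
        return max(min_scale, base_scale - frac * (base_scale - min_scale))
    return base_scale
```

A scheme like this lets each generated sample trade off fidelity to the real image against diversity, rather than using one fixed guidance scale for the whole dataset.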
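Resume features like the one described above typically work by checkpointing training state and reloading it on restart. A minimal, framework-agnostic sketch follows; the checkpoint file name, its fields, and the loop structure are assumptions, not DALDA's actual implementation.

```python
import json
from pathlib import Path

CKPT = Path("checkpoint.json")  # hypothetical checkpoint location

def train(total_epochs: int = 5) -> int:
    """Toy classifier-training loop that resumes from the last completed
    epoch if a checkpoint exists; returns the epoch it started from."""
    start = 0
    if CKPT.exists():  # resume: pick up where the previous run stopped
        start = json.loads(CKPT.read_text())["epoch"] + 1
    for epoch in range(start, total_epochs):
        # ... one epoch of classifier training would go here ...
        CKPT.write_text(json.dumps({"epoch": epoch}))
    return start
```

In a real training script the checkpoint would also store model and optimizer state, and a `--resume` flag would decide whether to load it.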